

The State of AI: How war will be changed forever

MIT Technology Review

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. In this conversation, Helen Warrell, FT investigations reporter and former defense and security editor, and James O'Donnell, MIT Technology Review's senior AI reporter, consider the ethical quandaries and financial incentives around AI's use by the military. It is July 2027, and China is on the brink of invading Taiwan. Autonomous drones with AI targeting capabilities are primed to overpower the island's air defenses as a series of crippling AI-generated cyberattacks cut off energy supplies and key communications. In the meantime, a vast disinformation campaign enacted by an AI-powered pro-Chinese meme farm spreads across global social media, deadening the outcry at Beijing's act of aggression.


Military AI Needs Technically-Informed Regulation to Safeguard AI Research and its Applications

Simmons-Edler, Riley, Dong, Jean, Lushenko, Paul, Rajan, Kanaka, Badman, Ryan P.

arXiv.org Artificial Intelligence

Military weapon systems and command-and-control infrastructure augmented by artificial intelligence (AI) have seen rapid development and deployment in recent years. However, the sociotechnical impacts of AI on combat systems, military decision-making, and the norms of warfare have been understudied. We focus on a specific subset of lethal autonomous weapon systems (LAWS) that use AI for targeting or battlefield decisions. We refer to this subset as AI-powered lethal autonomous weapon systems (AI-LAWS) and argue that they introduce novel risks -- including unanticipated escalation, poor reliability in unfamiliar environments, and erosion of human oversight -- all of which threaten both military effectiveness and the openness of AI research. These risks cannot be addressed by high-level policy alone; effective regulation must be grounded in the technical behavior of AI models. We argue that AI researchers must be involved throughout the regulatory lifecycle. Thus, we propose a clear, behavior-based definition of AI-LAWS -- systems that introduce unique risks through their use of modern AI -- as a foundation for technically grounded regulation, given that existing frameworks do not distinguish them from conventional LAWS. Using this definition, we propose several technically-informed policy directions and invite greater participation from the AI research community in military AI policy discussions.


US to test nuclear missile TODAY to show 'readiness' amid arms race fears

Daily Mail - Science & tech



Foreign nationals flying drones over US military sites raises 'espionage' concern: expert

FOX News

Federal officials face a looming threat of foreign nationals utilizing drones to surveil United States military bases after two recent arrests and a string of mysterious incursions suggest the country's airspace is ill-equipped to handle the rapidly evolving technology. In late 2024, the Department of Justice announced charges against Yinpiao Zhou, 39, for allegedly flying a drone over Vandenberg Space Force Base in California and taking photos of the facility. The Chinese-American citizen was detained as he attempted to board a China-bound flight and was charged with violation of national defense airspace and failure to register an aircraft. "Anyone operating a drone over a restricted space, like a military base, would be subject to prosecution," Ken Gray, a former FBI agent and military analyst, told Fox News Digital. "A foreign national operating [a drone] raises a concern about that person being involved in some type of espionage or intelligence gathering."


Balancing Power and Ethics: A Framework for Addressing Human Rights Concerns in Military AI

Islam, Mst Rafia, Wasi, Azmine Toushik

arXiv.org Artificial Intelligence

AI has made significant strides recently, leading to various applications in both civilian and military sectors. The military sees AI as a solution for developing more effective and faster technologies. While AI offers benefits like improved operational efficiency and precision targeting, it also raises serious ethical and legal concerns, particularly regarding human rights violations. Autonomous weapons that make decisions without human input can threaten the right to life and violate international humanitarian law. To address these issues, we propose a three-stage framework (Design, In Deployment, and During/After Use) for evaluating human rights concerns in the design, deployment, and use of military AI. Each phase includes multiple components that address various concerns specific to that phase, ranging from bias and regulatory issues to violations of International Humanitarian Law. Through this framework, we aim to balance the advantages of AI in military operations with the need to protect human rights.


Spotlight Session on Autonomous Weapons Systems at ICRC 34th International Conference

Conroy, Susannah Kate

arXiv.org Artificial Intelligence

Autonomous weapons systems (AWS) change the way humans make decisions, the effects of those decisions, and who is accountable for the decisions made. We must remain vigilant, informed, and human-centred as we deliberate on developing norms regarding their development, use, and justification. Ways to enhance compliance with international humanitarian law (IHL) include: training weapons decision-makers in IHL; developing best practice in weapons reviews, including requirements for industry to ensure that any new weapon, means, or method of warfare is capable of being used lawfully; developing human-centred test and evaluation methods; investing in digital infrastructure to increase knowledge of the civilian environment in a conflict and its dynamics; investing in research on the real effects and consequences of civilian harms for the achievement of military and political objectives; improving secure communications between stakeholders in a conflict; and upskilling governments and NGOs in what is technically achievable with emerging technologies so that they can contribute to system requirements, test and evaluation protocols, and operational rules of use and engagement. Governments are responsible for setting requirements for weapons systems, and they are responsible for driving ethicality as well as lethality. Governments can require systems to be made and used to better protect civilians and protected objects. The UN can advocate for compliance with IHL, human rights, and human-centred use of weapons systems, and for improved mechanisms to monitor and trace military decision-making, including decisions affected by autonomous functionality.


Neuro-Symbolic AI for Military Applications

Hagos, Desta Haileselassie, Rawat, Danda B.

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) plays a significant role in enhancing the capabilities of defense systems, revolutionizing strategic decision-making, and shaping the future landscape of military operations. Neuro-Symbolic AI is an emerging approach that leverages and augments the strengths of neural networks and symbolic reasoning. These systems have the potential to be more impactful and flexible than traditional AI systems, making them well-suited for military applications. This paper comprehensively explores the diverse dimensions and capabilities of Neuro-Symbolic AI, aiming to shed light on its potential applications in military contexts. We investigate its capacity to improve decision-making, automate complex intelligence analysis, and strengthen autonomous systems. We further explore its potential to solve complex tasks in various domains, in addition to its applications in military contexts. Through this exploration, we address ethical, strategic, and technical considerations crucial to the development and deployment of Neuro-Symbolic AI in military and civilian applications. Contributing to the growing body of research, this study represents a comprehensive exploration of the extensive possibilities offered by Neuro-Symbolic AI.


AI-Powered Autonomous Weapons Risk Geopolitical Instability and Threaten AI Research

Simmons-Edler, Riley, Badman, Ryan, Longpre, Shayne, Rajan, Kanaka

arXiv.org Artificial Intelligence

The recent embrace of machine learning (ML) in the development of autonomous weapons systems (AWS) creates serious risks to geopolitical stability and the free exchange of ideas in AI research. This topic has received comparatively little attention of late relative to risks stemming from superintelligent artificial general intelligence (AGI), but it requires fewer assumptions about the course of technological development and is thus a nearer-future issue. ML is already enabling the substitution of AWS for human soldiers in many battlefield roles, reducing the upfront human cost, and thus political cost, of waging offensive war. In the case of peer adversaries, this increases the likelihood of "low intensity" conflicts which risk escalation to broader warfare. In the case of non-peer adversaries, it reduces the domestic blowback to wars of aggression. This effect can occur regardless of other ethical issues around the use of military AI, such as the risk of civilian casualties, and does not require any superhuman AI capabilities. Further, the military value of AWS raises the specter of an AI-powered arms race and the misguided imposition of national security restrictions on AI research. Our goal in this paper is to raise awareness among the public and ML researchers of the near-future risks posed by full or near-full autonomy in military technology, and we provide regulatory suggestions to mitigate these risks. We call upon AI policy experts and the defense AI community in particular to embrace transparency and caution in their development and deployment of AWS to avoid the negative effects on global stability and AI research that we highlight here.


How artificial intelligence is reshaping modern warfare

FOX News

Fox News chief national security correspondent Jennifer Griffin reports on how technology is revolutionizing modern warfare on 'Special Report.' Modern warfare is changing rapidly, and harnessing artificial intelligence is key to staying ahead of America's adversaries. Software companies including Govini and Palantir are behind the production and modernization of today's most high-tech weapon systems. Both companies were at the second annual AI Expo for National Competitiveness in Washington to showcase their work to the nation's top military brass. Fox News saw first-hand this cutting-edge technology and had an exclusive interview with Palantir's CEO and co-founder Alex Karp, whose software is being used in Ukraine and the Middle East.


Why Protesters Around the World Are Demanding a Pause on AI Development

TIME - Tech

Just one week before the world's second-ever global summit on artificial intelligence, protesters of a small but growing movement called "Pause AI" demanded that the world's governments regulate AI companies and freeze the development of new cutting-edge artificial intelligence models. They say that the development of these models should only be allowed to continue if companies agree to let them be thoroughly evaluated for safety first. Protests took place across thirteen countries, including the U.S., the U.K., Brazil, Germany, Australia, and Norway on Monday. In London, a group of 20 or so protesters stood outside the U.K.'s Department of Science, Innovation and Technology chanting things like "stop the race, it's not safe" and "whose future?" The protesters say their goal is to get governments to regulate the companies developing frontier AI models, including OpenAI's ChatGPT. They say that companies are not taking enough precautions to make sure their AI models are safe enough to be released into the world. "[AI companies] have proven time and time again… through the way that these companies' workers are treated, with the way that they treat other people's work by literally stealing it and throwing it into their models. They have proven that they cannot be trusted," said Gideon Futerman, an Oxford undergraduate student who gave a speech at the protest. One protester, Tara Steele, a freelance writer who works on blogs and SEO content, said that she had seen the technology affect her own livelihood. "I have noticed since ChatGPT came out, the demand for freelance work has reduced dramatically," she says. "I love writing personally… I've really loved it."